Katsushi Ikeuchi

IK Seed Generator for Dual-Arm Human-like Physicality Robot with Mobile Base

May 01, 2025

RL-Driven Data Generation for Robust Vision-Based Dexterous Grasping

Apr 25, 2025

A Taxonomy of Self-Handover

Apr 08, 2025

Plan-and-Act using Large Language Models for Interactive Agreement

Apr 01, 2025

VLM-driven Behavior Tree for Context-aware Task Planning

Jan 07, 2025

Modality-Driven Design for Multi-Step Dexterous Manipulation: Insights from Neuroscience

Dec 15, 2024

APriCoT: Action Primitives based on Contact-state Transition for In-Hand Tool Manipulation

Jul 16, 2024

Designing Library of Skill-Agents for Hardware-Level Reusability

Mar 04, 2024

Agent AI: Surveying the Horizons of Multimodal Interaction

Jan 07, 2024

GPT-4V for Robotics: Multimodal Task Planning from Human Demonstration

Nov 20, 2023